Search for: All records
Total Resources: 3
- Author / Contributor
- Fox, Roy (2)
- Lanier, JB (2)
- A. Sharma (1)
- Aaron B. Zimmerman (1)
- Aaron Buikema (1)
- Aaron D. Viets (1)
- Aaron M. Holgado (1)
- Aaron Markowitz (1)
- Aaron W. Jones (1)
- Aashish Tripathee (1)
- Abhirup Ghosh (1)
- Abhishek Parida (1)
- Adam A. Mercer (1)
- Adam K. Zadrożny (1)
- Adam Kutynia (1)
- Adam Mullavey (1)
- Adam Zadrożny (1)
- Adrian Macquet (1)
- Agata Trovato (1)
- Aidan F. Brooks (1)
-
Model-Based Reinforcement Learning (MBRL) has shown promise in visual control tasks due to its data efficiency. However, training MBRL agents to develop generalizable perception remains challenging, especially amid visual distractions that introduce noise into representation learning. We introduce Segmentation Dreamer (SD), a framework that facilitates representation learning in MBRL by incorporating a novel auxiliary task. Assuming that the task-relevant components of an image can be identified with prior knowledge of the task, SD applies segmentation masks to image observations so that only task-relevant regions are reconstructed, reducing representation complexity. SD can leverage either ground-truth masks available in simulation or potentially imperfect segmentation foundation models; in the latter case, it selectively applies the image reconstruction loss to mitigate misleading learning signals from mask prediction errors. In modified DeepMind Control suite and Meta-World tasks with added visual distractions, SD achieves significantly better sample efficiency and higher final performance than prior methods, and it is especially effective in sparse-reward tasks that prior methods could not solve. We also validate its effectiveness in a real-world robotic lane-following task, training with intentional distractions for zero-shot transfer.
Free, publicly-accessible full text available August 5, 2026
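To make the masked-reconstruction idea concrete, here is a minimal PyTorch sketch of a segmentation-masked reconstruction loss. The function name, tensor shapes, and the confidence-threshold mechanism are illustrative assumptions, not SD's published implementation.

```python
# Minimal sketch (assumed, not SD's actual API): reconstruct only
# task-relevant pixels, optionally dropping low-confidence mask pixels.
import torch
import torch.nn.functional as F

def masked_recon_loss(pred, target, mask, conf=None, conf_thresh=0.9):
    """pred, target: (B, C, H, W) decoder output and observation.
    mask: (B, 1, H, W) binary mask of task-relevant regions (ground
          truth or from a segmentation foundation model).
    conf: optional (B, 1, H, W) mask-confidence map; pixels below
          conf_thresh are excluded from the loss, a guess at how a
          selective loss could sidestep mask prediction errors.
    """
    weight = mask
    if conf is not None:
        weight = weight * (conf >= conf_thresh).float()
    per_pixel = F.mse_loss(pred, target, reduction="none")
    # Normalize by the number of supervised pixels so the loss scale
    # stays comparable across masks of different sizes.
    return (per_pixel * weight).sum() / weight.sum().clamp(min=1.0)
```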
-
Kim, Kyungmin; Corsi, Davide; Rodriguez, Andoni; Lanier, JB; Parellada, Benjami; Baldi, Pierre; Sanchez, Cesar; Fox, Roy (7th Annual Conference on Learning for Dynamics and Control)
While Deep Reinforcement Learning (DRL) has achieved remarkable success across various domains, it remains vulnerable to occasional catastrophic failures without additional safeguards. An effective solution to prevent these failures is to use a shield that validates and adjusts the agent’s actions to ensure compliance with a provided set of safety specifications. For real-world robotic domains, it is essential to define safety specifications over continuous state and action spaces to accurately account for system dynamics and to compute new actions that minimally deviate from the agent’s original decision. In this paper, we present the first shielding approach specifically designed to ensure the satisfaction of safety requirements in continuous state and action spaces, making it suitable for practical robotic applications. Our method builds upon realizability, an essential property confirming that the shield can always generate a safe action for any state in the environment. We formally prove that realizability can be verified for stateful shields, enabling the incorporation of non-Markovian safety requirements such as loop avoidance. Finally, we demonstrate the effectiveness of our approach in ensuring safety without compromising the policy’s success rate by applying it to a navigation problem and a multi-agent particle environment.
Keywords: Shielding, Reinforcement Learning, Safety, Robotics
Free, publicly-accessible full text available June 4, 2026
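As a rough illustration of action shielding over a continuous action space, the toy sketch below projects a proposed action onto the set of actions that keep a one-dimensional integrator inside a box constraint. The dynamics, bounds, and function name are assumptions for illustration only; the stateful shields and realizability verification central to the paper are omitted.

```python
# Toy sketch (not the paper's method): known dynamics x' = x + a*dt,
# safety spec |x'| <= x_max, actuator limits |a| <= a_max.
import numpy as np

def shield(x: float, a: float, dt: float = 0.1,
           x_max: float = 1.0, a_max: float = 1.0) -> float:
    """Return the safe action closest to the agent's proposed action."""
    # Actions keeping the next state in the safe set form an interval;
    # intersect it with the actuator limits, then project onto it to
    # get the minimally deviating safe action.
    lo = max((-x_max - x) / dt, -a_max)
    hi = min((x_max - x) / dt, a_max)
    # Realizability (a nonempty interval from every reachable state)
    # is what the paper verifies; this toy simply assumes it holds.
    return float(np.clip(a, lo, hi))
```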
-
Rich Abbott; Thomas D. Abbott; Sheelu Abraham; Fausto Acernese; Kendall Ackley; Carl Adams; Rana X. Adhikari; Vaishali B. Adya; Christoph Affeldt; Michalis Agathos; et al. (SoftwareX)